# Instruction Tuning

Mistral Small 24B Instruct 2501
Mistral Small 24B is a large language model developed by Mistral AI, with 24 billion parameters and support for multilingual conversation and instruction following. Through instruction tuning it generates high-quality text for scenarios such as chat, writing, and programming assistance, and its key strengths are strong language generation, multilingual support, and efficient inference. Released under an open-source license, the model supports local deployment and quantization, making it a fit for individuals and businesses that need high-performance language processing and have data-privacy requirements (see the loading sketch below).
Chatbot
57.4K
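
A minimal local-inference sketch with Hugging Face `transformers`, assuming the model is published under the Hugging Face ID `mistralai/Mistral-Small-24B-Instruct-2501` and that a GPU with enough memory for a 24B model in bf16 is available (quantized builds need less):

```python
# Minimal chat-inference sketch; the model ID and hardware assumptions are noted above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed Hugging Face ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize instruction tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For stricter data-privacy setups, the same code runs fully offline once the weights have been downloaded.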

EXAONE 3.5 32B Instruct
EXAONE-3.5-32B-Instruct is the 32B-parameter model in LG AI Research's series of instruction-tuned bilingual (English and Korean) generation models, which spans variants from 2.4B to 32B parameters. The models support long contexts of up to 32K tokens and deliver state-of-the-art performance in real-world use cases and long-context understanding, while remaining competitive in general domains with similarly sized, recently released models.
AI Model
44.2K

EXAONE 3.5 2.4B Instruct GGUF
EXAONE-3.5-2.4B-Instruct-GGUF provides GGUF builds of the 2.4B-parameter model in LG AI Research's bilingual (English and Korean) instruction-tuned series, which spans 2.4B to 32B parameters. The models support long contexts of up to 32K tokens and demonstrate state-of-the-art performance in real-world use cases and long-context comprehension, while remaining competitive in the general domain with similarly sized, recently released models. This release is notable for being optimized for deployment on small or resource-constrained devices while still delivering strong performance (see the sketch below).
AI Model
48.6K
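
Because the GGUF build targets small or CPU-only machines, a sketch with `llama-cpp-python` may be useful; the repo ID and quantization filename below are assumptions, so check the actual release for exact names:

```python
# Sketch of running a GGUF build on a CPU-only machine with llama-cpp-python.
# Requires llama-cpp-python and huggingface_hub; repo ID and filename are assumed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct-GGUF",  # assumed repo ID
    filename="*Q4_K_M.gguf",                              # assumed quantization file
    n_ctx=32768,   # the series advertises contexts of up to 32K tokens
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "안녕하세요! Please answer in English."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```

The 4-bit Q4_K_M quantization is a common trade-off between memory footprint and quality; other quantization levels in the release can be swapped in via the filename pattern.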

Mammoth VL
MAmmoTH-VL is a large-scale multimodal reasoning project that substantially improves the performance of multimodal large language models (MLLMs) on a range of multimodal tasks through instruction tuning. Using only open models, it built a dataset of 12 million instruction-response pairs that covers reasoning-intensive tasks and provides detailed, accurate reasoning steps. MAmmoTH-VL achieves state-of-the-art results on benchmarks such as MathVerse, MMMU-Pro, and MuirBench, underlining its value for education and research.
AI Model
46.4K

Llava Video
LLaVA-Video is a large multimodal model (LMM) focused on video instruction tuning, addressing the challenge of acquiring high-quality raw data from the internet by creating a high-quality synthetic dataset, LLaVA-Video-178K. This dataset includes detailed video descriptions, open-ended questions, and multiple-choice questions, aimed at enhancing the understanding and reasoning capabilities of video language models. The LLaVA-Video model has demonstrated outstanding performance across various video benchmarks, validating the effectiveness of its dataset.
AI Model
55.5K

Data Juicer
Data-Juicer is a comprehensive multimodal data processing system aimed at delivering higher-quality, richer, and more digestible data for large language models (LLMs). It offers a systematic, reusable data processing library and supports co-development of data and models, with a sandbox lab for rapid iteration. Features such as data-model feedback loops, visualization, and multidimensional automated evaluation help users understand and improve both their data and their models. Data-Juicer is actively maintained and regularly extended with new features, data recipes, and datasets.
AI Data Mining
61.8K
Fresh Picks

Llama3.1 8B Chinese Chat
Llama3.1-8B-Chinese-Chat is an instruction-tuned language model based on Meta-Llama-3.1-8B-Instruct, designed for Chinese and English users and offering capabilities such as role-playing and tool use. It is fine-tuned with the ORPO algorithm (see the sketch below), which markedly reduces cases where Chinese questions are answered in English as well as mixed-language responses, with notable gains in role-playing, function calling, and mathematical reasoning.
AI Model
78.9K
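
As an illustration only (not the authors' actual training recipe), ORPO-style preference fine-tuning can be sketched with Hugging Face TRL's `ORPOTrainer`; the base model ID, hyperparameters, and the tiny in-memory preference dataset below are placeholders:

```python
# Illustrative ORPO fine-tuning sketch with TRL; not the model authors' recipe.
# A real run needs a full preference dataset with "prompt", "chosen", "rejected" columns.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # gated; accept the license first
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder preference data: a Chinese answer is "chosen", an English answer "rejected",
# mirroring the language-consistency goal described above.
train_dataset = Dataset.from_dict({
    "prompt":   ["用中文解释什么是指令微调。"],
    "chosen":   ["指令微调是指用指令-回答对继续训练语言模型，使其更好地遵循指令。"],
    "rejected": ["Instruction tuning means training a model on instructions."],
})

config = ORPOConfig(
    output_dir="llama31-8b-orpo-demo",
    beta=0.1,                  # weight of the odds-ratio preference term
    max_length=1024,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases use tokenizer=...
)
trainer.train()
```

ORPO folds the preference signal into the supervised objective via an odds-ratio term, so no separate reference model or reward model is needed.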
Fresh Picks

Llama3.1 70B Chinese Chat
Llama3.1-70B-Chinese-Chat is an instruction-tuned language model based on Meta-Llama-3.1-70B-Instruct, designed for Chinese and English bilingual users and offering capabilities such as role-playing and tool use. Like its 8B sibling, it is fine-tuned with the ORPO algorithm, which significantly reduces cases where Chinese questions receive English answers and mixed-language responses, with notable improvements in role-playing, function calling, and mathematical ability.
AI Model
73.1K
Fresh Picks

Gemma 2 27B Chinese Chat
Gemma-2-27B-Chinese-Chat is the first instruction-tuned language model based on google/gemma-2-27b-it, designed specifically for Chinese and English users. It possesses various capabilities, including role-playing and tool usage. Fine-tuned using the ORPO algorithm, the model has significantly improved performance in Chinese and English conversations, role-playing, and mathematical calculations.
AI Conversational Agents
71.2K

MG LLaVA
MG-LLaVA is a multimodal large language model (MLLM) designed to enhance visual processing. It incorporates a multi-granularity visual pipeline spanning low-resolution, high-resolution, and object-centric features: an additional high-resolution visual encoder captures finer details, a Conv-Gate fusion network merges these high-resolution features with the base visual features (see the sketch below), and object-level features derived from offline-detector bounding boxes further sharpen object recognition. Trained via instruction tuning on publicly available multimodal data, MG-LLaVA exhibits strong perceptual skills.
AI Model
44.4K
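
A minimal PyTorch sketch of the gated-fusion idea described above, not the MG-LLaVA reference implementation; module names, channel sizes, and shapes here are assumptions:

```python
# Illustrative Conv-Gate style fusion of base (low-resolution) and high-resolution
# visual features; a sketch of the idea, not the paper's implementation.
import torch
import torch.nn as nn

class ConvGateFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # The gate is computed from the concatenated feature maps, then used to
        # blend high-resolution detail into the base visual features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, base_feats: torch.Tensor, hires_feats: torch.Tensor) -> torch.Tensor:
        # base_feats, hires_feats: (batch, channels, H, W), already aligned in size.
        g = self.gate(torch.cat([base_feats, hires_feats], dim=1))
        return base_feats + g * hires_feats

# Toy usage with random feature maps.
fusion = ConvGateFusion(channels=256)
base = torch.randn(1, 256, 24, 24)
hires = torch.randn(1, 256, 24, 24)
print(fusion(base, hires).shape)  # torch.Size([1, 256, 24, 24])
```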

Mistral 7B Instruct V0.2
Mistral-7B-Instruct-v0.2 is a large language model instruction-fine-tuned from the Mistral-7B-v0.2 base model, featuring a 32k context window and a RoPE theta of 1e6. Given an instruction, it generates the corresponding text output and supports tasks such as Q&A, writing, and translation; instruction tuning lets it understand and follow instructions more reliably (the prompt format is sketched below). The model currently ships without a built-in moderation mechanism, but it will continue to be optimized to support deployment in more scenarios.
AI Model
81.4K
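
A short sketch of how the instruction format is applied in practice, assuming the Hugging Face ID `mistralai/Mistral-7B-Instruct-v0.2`; the chat template wraps user turns in `[INST] ... [/INST]` markers:

```python
# Render the instruction-format prompt without loading the model weights.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [{"role": "user", "content": "Translate to French: instruction tuning."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # e.g. "<s>[INST] Translate to French: instruction tuning. [/INST]"
```

The same `messages` list can then be passed through `apply_chat_template(..., return_tensors="pt")` and `model.generate` for full inference, as in the Mistral Small example above.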

Gemma 7B IT
Gemma-7B-IT is a 7B-parameter instruction-tuned model from Google, built from the same research and technology as the Gemini models and designed to strengthen mathematical ability, logical reasoning, and code generation. It can run on a standard laptop without heavy AI compute, making it suitable for a wide range of application scenarios (see the quantized-loading sketch below).
AI Model
85.6K
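
To make laptop-class deployment concrete, one common route is 4-bit quantized loading; this sketch assumes a machine with a CUDA-capable GPU, bitsandbytes installed, and access to the gated `google/gemma-7b-it` checkpoint (CPU-only machines would instead use a GGUF build, as in the EXAONE example above):

```python
# 4-bit quantized loading via bitsandbytes to reduce memory needs on consumer hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-7b-it"  # gated: accept the license on Hugging Face first
quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant, device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24? Explain step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 4-bit path trades a small amount of quality for a roughly four-fold reduction in weight memory compared with bf16 loading.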